Adaptive Tree CPDs in Max-Product Belief Propagation
Author
Abstract
In general, computing the maximum a posteriori (MAP) assignment in a Bayesian network is computationally intractable. In some cases, such as tree-structured networks, inference can be performed efficiently and exactly. However, practical challenges remain when performing inference in networks containing variables with large cardinalities: representing and manipulating the local conditional probability distributions (CPDs) may be cumbersome with standard techniques. Since one is then typically forced to approximate the CPD representation, exact inference remains out of reach even in networks with otherwise tractable structure. I present an adaptive CPD representation suitable for max-product inference that adjusts its complexity as inference progresses, offering a means of performing exact inference in networks with tractable structure but prohibitively large variable cardinalities. In a series of experiments on Hidden Markov Models, I show that my technique gives a speed improvement over standard Viterbi decoding on networks with large state spaces, and that my method successfully performs inference on problems that would otherwise be intractable.
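For context, the baseline the abstract compares against is standard Viterbi decoding, i.e. max-product message passing along an HMM chain with dense CPD tables. The sketch below is a minimal, generic implementation of that baseline only, not of the paper's adaptive tree CPDs; the array names log_init, log_trans, and log_emit are illustrative assumptions rather than anything defined in the paper.

import numpy as np

def viterbi(log_init, log_trans, log_emit, observations):
    # log_init:  (S,)   log P(z_0 = s)
    # log_trans: (S, S) log P(z_t = j | z_{t-1} = i), row i -> column j
    # log_emit:  (S, V) log P(x_t = v | z_t = s)
    # observations: list of observation indices of length T
    S = log_init.shape[0]
    T = len(observations)
    delta = np.empty((T, S))               # best log-score of any path ending in each state
    backptr = np.zeros((T, S), dtype=int)  # argmax predecessors for backtracking

    delta[0] = log_init + log_emit[:, observations[0]]
    for t in range(1, T):
        # scores[i, j] = delta[t-1, i] + log_trans[i, j]; maximize over previous state i
        scores = delta[t - 1][:, None] + log_trans
        backptr[t] = np.argmax(scores, axis=0)
        delta[t] = scores[backptr[t], np.arange(S)] + log_emit[:, observations[t]]

    # Backtrack the most probable (MAP) state sequence.
    path = np.empty(T, dtype=int)
    path[-1] = int(np.argmax(delta[-1]))
    for t in range(T - 2, -1, -1):
        path[t] = backptr[t + 1, path[t + 1]]
    return path

# Toy usage: a 3-state, 2-symbol HMM decoded over a short observation sequence.
log_init = np.log(np.full(3, 1 / 3))
log_trans = np.log(np.array([[0.8, 0.1, 0.1], [0.1, 0.8, 0.1], [0.1, 0.1, 0.8]]))
log_emit = np.log(np.array([[0.9, 0.1], [0.5, 0.5], [0.1, 0.9]]))
print(viterbi(log_init, log_trans, log_emit, [0, 0, 1, 1]))

Each step of this recursion touches the full S x S transition table, so the cost is O(T S^2); this is exactly the expense that becomes prohibitive for large state spaces and that an adaptive CPD representation is intended to avoid.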
Similar papers
Cooled and Relaxed Survey Propagation for MRFs
We describe a new algorithm, Relaxed Survey Propagation (RSP), for finding MAP configurations in Markov random fields. We compare its performance with state-of-the-art algorithms including the max-product belief propagation, its sequential tree-reweighted variant, residual (sum-product) belief propagation, and tree-structured expectation propagation. We show that it outperforms all approaches f...
MAP Estimation, Linear Programming and Belief Propagation with Convex Free Energies
Finding the most probable assignment (MAP) in a general graphical model is known to be NP-hard, but good approximations have been attained with max-product belief propagation (BP) and its variants. In particular, it is known that using BP on a single-cycle graph or tree-reweighted BP on an arbitrary graph will give the MAP solution if the beliefs have no ties. In this paper we extend the setting...
Finding the M Most Probable Configurations using Loopy Belief Propagation
Loopy belief propagation (BP) has been successfully used in a number of difficult graphical models to find the most probable configuration of the hidden variables. In applications ranging from protein folding to image analysis one would like to find not just the best configuration but rather the top M. While this problem has been solved using the junction tree formalism, in many real world pro...
Fast Inference and Learning with Sparse Belief Propagation
Even in trees, exact probabilistic inference can be expensive when the cardinality of the variables is large. This is especially troublesome for learning, because many standard estimation techniques, such as EM and conditional maximum likelihood, require calling an inference algorithm many times. In max-product inference, a standard heuristic for controlling this complexity in linear chains is ...
KAIST CS 774 Markov Random Field: Theory and Application, Sep 10, 2009, Lecture 3
In this lecture, we study the Belief Propagation (BP) algorithm and the Max-Product (MP) algorithm. The previous lecture reminded us that in MRFs, computing the marginal probabilities of random variables and the maximum a posteriori (MAP) assignment are important problems. Belief Propagation is a popular algorithm used to compute the marginal probabilities of random variables. The Max-Product algorithm is ...